16 research outputs found

    Quality scalability aware watermarking for visual content

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose the novel concept of scalable blind watermarking, which ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms, and is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to produce a watermarked image that meets the desired distortion-robustness requirements, and a blind extractor recovers the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000, which improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation; the proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
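The quantization-based blind embedding and extraction described above can be illustrated with a minimal quantization index modulation (QIM) sketch. This is not the paper's actual scheme (which uses a quantization-guided binary tree over wavelet coefficients); the coefficient values, step size `delta`, and function names are illustrative assumptions:

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one watermark bit by snapping the coefficient to one of two
    interleaved quantizer lattices (offsets delta/4 and 3*delta/4)."""
    q = np.floor(coeff / delta) * delta
    return q + (delta * 0.25 if bit == 0 else delta * 0.75)

def qim_extract(coeff, delta=8.0):
    """Blind extraction: decide the bit from the coefficient's position
    within its quantization cell; no original image is needed."""
    r = np.mod(coeff, delta)
    return 0 if r < delta * 0.5 else 1

# Embed a bit string into a toy "wavelet coefficient" vector.
coeffs = np.array([13.2, -7.9, 41.5, 2.4])
bits = [1, 0, 1, 1]
marked = np.array([qim_embed(c, b) for c, b in zip(coeffs, bits)])

# Extraction survives any perturbation smaller than delta/4,
# which is the sense in which such schemes resist mild compression.
noisy = marked + np.random.uniform(-1.9, 1.9, size=marked.shape)
recovered = [qim_extract(c) for c in noisy]
print(recovered)  # → [1, 0, 1, 1]
```

The paper's distortion-robustness atoms can be thought of as varying `delta`: a larger step means more embedding distortion but a wider noise margin.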

    Visual attention-based image watermarking

    Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness, while high-strength watermarking achieves good robustness but often introduces distortions that degrade the visual quality of the host media. If the distortion due to high-strength watermarking avoids visually attentive regions, it is unlikely to be noticed by viewers. In this paper, we exploit this observation and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower- and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that allows us to design new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in joint saliency detection and computational complexity. In evaluating watermarking performance, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% against JPEG2000 compression and up to 40% against common filtering attacks are reported over existing algorithms that do not use a visual attention model.
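The core idea above, modulating embedding strength inversely with visual saliency, can be sketched as follows. The saliency map here is a synthetic placeholder (the paper derives it from a wavelet-domain attention model), and the strength values `alpha_low`/`alpha_high` and the additive embedding are illustrative assumptions:

```python
import numpy as np

def saliency_weighted_embed(image, watermark, saliency,
                            alpha_low=0.02, alpha_high=0.10, threshold=0.5):
    """Additively embed a watermark pattern, using low strength in
    visually salient regions and high strength elsewhere."""
    # Two-level strength map: salient pixels get alpha_low, the rest alpha_high.
    alpha = np.where(saliency >= threshold, alpha_low, alpha_high)
    return image + alpha * watermark

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, (8, 8))
watermark = rng.choice([-1.0, 1.0], (8, 8))   # a +/-1 watermark pattern
saliency = np.zeros((8, 8))
saliency[2:6, 2:6] = 1.0                      # a salient centre block

marked = saliency_weighted_embed(image, watermark, saliency)
# Distortion is 5x smaller in the salient block than in the background,
# so the strongly watermarked regions are exactly the unattended ones.
print(np.abs(marked - image)[3, 3], np.abs(marked - image)[0, 0])
```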

    Global motion compensated visual attention-based video watermarking

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness, while high-strength watermarking achieves good robustness but often suffers from embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency: the spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, yielding a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding high-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in joint saliency detection and computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (non-blind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use a VAM. The proposed method achieves visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
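The fusion of spatial and temporal cues into one spatiotemporal saliency map can be sketched minimally. The paper's VAM uses wavelet coefficients for the spatial cue and local/global motion analysis for the temporal cue; the crude frame-difference proxy and equal fusion weight `w` below are simplifying assumptions:

```python
import numpy as np

def spatial_cue(frame):
    """Crude spatial saliency proxy: deviation from the frame mean
    (the paper instead derives this from spatial wavelet coefficients)."""
    s = np.abs(frame - frame.mean())
    return s / (s.max() + 1e-9)

def temporal_cue(prev_frame, frame):
    """Crude temporal saliency proxy: normalised frame difference
    (a stand-in for motion-compensated local/global motion analysis)."""
    d = np.abs(frame - prev_frame)
    return d / (d.max() + 1e-9)

def spatiotemporal_saliency(prev_frame, frame, w=0.5):
    """Fuse the spatial and temporal cues into one spatiotemporal map."""
    return w * spatial_cue(frame) + (1 - w) * temporal_cue(prev_frame, frame)

prev_f = np.zeros((4, 4))
cur_f = np.zeros((4, 4))
cur_f[1, 1] = 10.0                      # a moving bright spot
sal = spatiotemporal_saliency(prev_f, cur_f)
print(sal[1, 1] > sal[0, 0])            # the moving region is most salient → True
```

The resulting map would then be thresholded into the two-level weighting parameter map described in the abstract.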

    On robustness against JPEG2000: a performance evaluation of wavelet-based watermarking techniques

    With the emergence of scalable coding standards such as JPEG2000, multimedia is stored as scalable coded bit streams that may be adapted to cater to network, device, and usage preferences in multimedia consumption chains providing universal multimedia access. These adaptations include quality, resolution, frame rate, and region-of-interest scalability, and are achieved by discarding the least significant parts of the bit stream according to the scalability criteria. Such content adaptations may also affect content protection data, such as watermarks, hidden in the original content. Many wavelet-based watermarking techniques robust to such JPEG2000 compression attacks have been proposed in the literature. In this paper, we categorize and evaluate the robustness of these wavelet-based image watermarking techniques against JPEG2000 compression in terms of algorithmic choices, wavelet kernel selection, subband selection, and watermark selection, using a new modular framework. As most of the algorithms use different parameter combinations, this analysis is particularly useful for understanding the effect of the various parameters on robustness under a common platform, and helpful when designing new algorithms. The analysis also considers the imperceptibility performance of the watermark embedding, as robustness and imperceptibility are the two main, mutually complementary watermarking properties.
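The kind of modular evaluation described above can be sketched as a small harness that sweeps one parameter and reports bit error rate (BER). Here QIM embedding stands in for one algorithmic choice, and coarse re-quantization stands in for JPEG2000 quality-scalable compression; both are simplified assumptions, as is the parameter grid:

```python
import numpy as np

def embed(coeffs, bits, delta=8.0):
    """QIM-style embedding of bits into coefficients (one algorithmic choice)."""
    q = np.floor(coeffs / delta) * delta
    return q + np.where(np.array(bits) == 0, delta * 0.25, delta * 0.75)

def extract(coeffs, delta=8.0):
    """Blind extraction of the embedded bits."""
    return (np.mod(coeffs, delta) >= delta * 0.5).astype(int)

def compression_attack(coeffs, step):
    """Model quality-scalable compression as coarse re-quantization
    (a crude stand-in for JPEG2000 bit-plane discarding)."""
    return np.round(coeffs / step) * step

def bit_error_rate(bits, recovered):
    return float(np.mean(np.array(bits) != recovered))

rng = np.random.default_rng(1)
coeffs = rng.uniform(-100, 100, 1000)
bits = rng.integers(0, 2, 1000)
marked = embed(coeffs, bits)

# Sweep attack severity: BER degrades as the attack quantizer coarsens.
for step in (1.0, 4.0, 16.0):
    ber = bit_error_rate(bits, extract(compression_attack(marked, step)))
    print(f"attack step {step:5.1f}: BER = {ber:.3f}")
```

A real instance of the surveyed framework would additionally iterate over wavelet kernels and subbands, which is where the modularity pays off.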

    2D+t Wavelet Domain Video Watermarking

    A novel watermarking framework for scalable coded video that improves the robustness against quality scalable compression is presented in this paper. Unlike the conventional spatial-domain (t + 2D) watermarking scheme, where motion compensated temporal filtering (MCTF) is performed on the spatial frame-wise video data to decompose the video, the proposed framework applies the MCTF in the wavelet domain (2D + t) to generate the coefficients in which the watermark is embedded. Robustness performance against scalable content adaptation, such as Motion JPEG 2000, MC-EZBC, or H.264-SVC, is reviewed for various combinations of motion compensated 2D + t + 2D using the proposed framework. The MCTF is improved by modifying the update step to follow the motion trajectory in the hierarchical temporal decomposition, using direct motion vector fields in the update step and implied motion vectors in the prediction step. The results show smaller embedding distortion in terms of both peak signal-to-noise ratio and flickering metrics compared to frame-by-frame video watermarking, while the robustness against scalable compression is improved by using the 2D + t domain over the conventional t + 2D domain, particularly for blind watermarking schemes where the motion is estimated from the watermarked video.
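The temporal decomposition at the heart of MCTF can be illustrated with a Haar lifting step over frame pairs. This sketch omits motion compensation entirely and works on raw frames, whereas the paper applies the lifting in the 2D wavelet domain and steers the predict/update steps along motion trajectories:

```python
import numpy as np

def haar_mctf_level(frames):
    """One temporal Haar lifting level: the predict step yields high-pass
    (detail) frames, the update step yields low-pass (average) frames."""
    lows, highs = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        h = b - a            # predict step: temporal detail
        l = a + 0.5 * h      # update step: temporal average, (a + b) / 2
        lows.append(l)
        highs.append(h)
    return lows, highs

def haar_mctf_inverse(lows, highs):
    """Perfect reconstruction by running the lifting steps in reverse."""
    frames = []
    for l, h in zip(lows, highs):
        a = l - 0.5 * h      # undo the update step
        b = h + a            # undo the predict step
        frames.extend([a, b])
    return frames

frames = [np.full((2, 2), float(v)) for v in (10, 12, 20, 18)]
lows, highs = haar_mctf_level(frames)
# A watermark would typically be embedded in the low-pass temporal subband
# here, since it survives temporal scaling of the bit stream.
rec = haar_mctf_inverse(lows, highs)
print(all(np.allclose(f, r) for f, r in zip(frames, rec)))  # → True
```

Recursing on the low-pass frames gives the hierarchical temporal decomposition the abstract refers to.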

    Video Based Technology for Ambient Assisted Living: A review of the literature

    Ambient assisted living (AAL) has the ambitious goal of improving the quality of life and maintaining the independence of older and vulnerable people through the use of technology. Most of the western world will see a very large increase in the number of older people within the next 50 years, with limited resources to care for them. AAL is seen as a promising alternative to current care models and has consequently attracted considerable attention. Recently, a number of researchers have developed solutions based on video cameras and computer vision systems, with promising results. However, for the domain to reach maturity, several challenges need to be addressed, including the development of systems that are robust in the real world and accepted by users, carers, and society. In this literature review we present a comprehensive survey of the scope of the domain, the existing technical solutions, and the challenges to be faced.